Score...
Okay, how did we do that? This page explains the method I used to calculate the scores of the books. I'll try to use as few statistical buzz-words as possible, but there will be some math involved. My apologies.
Process...
- Reviewers were asked to send in nominations. There were 196 books, series, trilogies, and comic series nominated.
- Reviewers received the complete nomination list and rated each book from -5 (worst) to 5 (best). That's 11 possible scores, with 0 right in the middle. There were 10 reviewers. Each reviewer only voted for books they had read (I hope).
- Due to the quality of the books nominated, or to reviewer bias, the average rating was not 0. In fact, the average ranged from about 1.42 to 3.48, depending on the reviewer. To avoid penalizing books liked by reviewers who preferred lower scores, and to prevent score inflation, I took the average and standard deviation for each reviewer. Standard deviation is a measure of how spread out the scores were: a higher standard deviation means a reviewer used more of the scale and their scores fluctuated more widely.
- Here's the math part: using Microsoft Excel (you think I could do this by hand? No way!), I took a single reviewer's scores and subtracted that reviewer's average from all of them. This gave me positive and negative scores relative to that reviewer's average, rather than relative to 0. This compensated for the differences between reviewer averages, which ranged from about 1.42 to 3.48.
- Next I had to fix the problem where some reviewers used the whole scale while others only used a tiny part of it. I took the new scores and divided each one by that reviewer's standard deviation. If a reviewer's average was 3 and her standard deviation was 1, a book she gave a 4 would now have a score of 1, and a book she gave a 2 would now have a score of -1.
- This year, I did not average these scores. Why? Because if I average the scores, it makes little difference whether 5 people voted on a book or 1 person did. If I simply add them, positively rated books that more people reviewed have a better chance of moving up the list, and negatively rated books read by many people will tend to move down. This seems a bit fairer than last year's averaging system.
- To make the scores easier to read, I multiplied them all by 100. Voila. The numbers this year range from about 22 (or .22) to 600 (or 6.0), though they can't really be compared to last year's scores, because they are not divided by the number of reviewers. (If you're curious, there's a little sketch of the whole calculation right after this list.)
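For anyone who wants to see the whole calculation in one place, here's a rough sketch of it written out as a short Python program instead of an Excel sheet. The reviewers, books, and ratings below are made up purely for illustration (the real data had 10 reviewers and 196 nominations); None marks a book a reviewer hadn't read and so didn't rate.

    from statistics import mean, stdev

    # Hypothetical ratings, -5 (worst) to 5 (best); None = reviewer hadn't read it.
    ratings = {
        "Reviewer A": {"The Hobbit": 5, "Book X": 2, "Book Y": None},
        "Reviewer B": {"The Hobbit": 4, "Book X": None, "Book Y": 1},
        "Reviewer C": {"The Hobbit": 3, "Book X": 0, "Book Y": 5},
    }

    totals = {}
    for reviewer, their_ratings in ratings.items():
        rated = [r for r in their_ratings.values() if r is not None]
        avg = mean(rated)   # this reviewer's average rating
        sd = stdev(rated)   # how spread out this reviewer's ratings are
        for book, r in their_ratings.items():
            if r is None:   # reviewer hadn't read this one, so skip it
                continue
            # Subtract the reviewer's average, then divide by their standard
            # deviation, so every reviewer ends up on the same footing;
            # add (don't average) across reviewers.
            totals[book] = totals.get(book, 0) + (r - avg) / sd

    # Multiply by 100 to make the final scores easier to read.
    final = {book: round(total * 100) for book, total in totals.items()}
    for book, score in sorted(final.items(), key=lambda pair: pair[1], reverse=True):
        print(book, score)

The real thing was done with Excel formulas, of course, but the steps are the same ones described above.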
Cute Little Facts...
(or, I have too much time on my hands)
- No book had been read by all the reviewers. Two books, A Wrinkle in Time and The Hobbit, had been read by 9 reviewers. Six books had been read by 8 reviewers, and six by 7. There were far fewer books rated by only one reviewer this year than last year.
- Only 5 reviewers nominated anything this year, but 10 voted. Three books had to be dropped from the final list because, though they were nominated, no one rated them!
- Only 28 authors on the final list have not yet been reviewed on our pages. This is a distinct improvement over last year...
This page owned by: Raven
Questions? Comments? Smart Remarks? Email me at [email protected]
Last Updated: February 29, 2000